Results 1 - 20 of 4,358
1.
Multisens Res ; 37(2): 125-141, 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38714314

ABSTRACT

Trust is critical to human social interaction, and research has identified many cues that contribute to the perception of this social trait. Two of these cues are the pitch of the voice and the facial width-to-height ratio (fWHR). Research has also indicated that the content of a spoken sentence itself affects perceived trustworthiness, a finding that has not yet been brought into multisensory research. The current research aims to investigate previously developed theories on trust in relation to vocal pitch, fWHR, and sentence content in a multimodal setting. Twenty-six female participants were asked to judge the trustworthiness of a voice speaking a neutral or romantic sentence while seeing a face. The average pitch of the voice and the fWHR were varied systematically. Results indicate that the content of the spoken message was an important predictor of trustworthiness, extending into multimodality. Further, the mean pitch of the voice and the fWHR of the face appeared to be useful indicators in a multimodal setting, and these effects interacted with one another across modalities. The data demonstrate that trust in the voice is shaped by task-irrelevant visual stimuli. Future research is encouraged to clarify whether these findings remain consistent across genders, age groups, and languages.


Subject(s)
Face, Trust, Voice, Humans, Female, Voice/physiology, Young Adult, Adult, Face/physiology, Speech Perception/physiology, Pitch Perception/physiology, Facial Recognition/physiology, Cues, Adolescent
2.
J Psychiatry Neurosci ; 49(3): E145-E156, 2024.
Article in English | MEDLINE | ID: mdl-38692692

ABSTRACT

BACKGROUND: Neuroimaging studies have revealed abnormal functional interaction during the processing of emotional faces in patients with major depressive disorder (MDD), thereby enhancing our comprehension of the pathophysiology of MDD. However, it is unclear whether there is abnormal directional interaction among face-processing systems in patients with MDD. METHODS: A group of patients with MDD and a healthy control group underwent a face-matching task during functional magnetic resonance imaging. Dynamic causal modelling (DCM) analysis was used to investigate effective connectivity between 7 regions in the face-processing systems. We used a Parametric Empirical Bayes model to compare effective connectivity between patients with MDD and controls. RESULTS: We included 48 patients and 44 healthy controls in our analyses. Both groups showed higher accuracy and faster reaction time in the shape-matching condition than in the face-matching condition. However, no significant behavioural or brain activation differences were found between the groups. Using DCM, we found that, compared with controls, patients with MDD showed decreased self-connection in the right dorsolateral prefrontal cortex (DLPFC), amygdala, and fusiform face area (FFA) across task conditions; increased intrinsic connectivity from the right amygdala to the bilateral DLPFC, right FFA, and left amygdala, suggesting an increased intrinsic connectivity centred in the amygdala in the right side of the face-processing systems; both increased and decreased positive intrinsic connectivity in the left side of the face-processing systems; and comparable task modulation effect on connectivity. LIMITATIONS: Our study did not include longitudinal neuroimaging data, and there was limited region of interest selection in the DCM analysis. 
CONCLUSION: Our findings provide evidence for a complex pattern of alterations in the face-processing systems in patients with MDD, potentially involving the right amygdala to a greater extent. The results confirm some previous findings and highlight the crucial role of the regions on both sides of face-processing systems in the pathophysiology of MDD.


Subject(s)
Amygdala, Depressive Disorder, Major, Facial Recognition, Magnetic Resonance Imaging, Humans, Depressive Disorder, Major/physiopathology, Depressive Disorder, Major/diagnostic imaging, Male, Female, Adult, Facial Recognition/physiology, Amygdala/diagnostic imaging, Amygdala/physiopathology, Brain/diagnostic imaging, Brain/physiopathology, Neural Pathways/physiopathology, Neural Pathways/diagnostic imaging, Bayes Theorem, Young Adult, Brain Mapping, Facial Expression, Middle Aged, Reaction Time/physiology
3.
Sci Rep ; 14(1): 10040, 2024 05 02.
Article in English | MEDLINE | ID: mdl-38693189

ABSTRACT

Investigation of visual illusions helps us understand how we process visual information. For example, face pareidolia, the misperception of illusory faces in objects, can be used to understand how we process real faces. However, it remains unclear whether this illusion emerges from errors in face detection or from slower, cognitive processes. Here, our logic is straightforward: if examples of face pareidolia activate the mechanisms that rapidly detect faces in visual environments, then participants will look at objects more quickly when those objects also contain illusory faces. To test this hypothesis, we sampled continuous eye movements during a fast saccadic choice task in which participants were required to select either faces or food items. During this task, pairs of stimuli were positioned close to the initial fixation point or further away, in the periphery. As expected, participants were faster to look at face targets than food targets. Importantly, we also discovered an advantage for food items with illusory faces, but this advantage was limited to the peripheral condition. These findings are among the first to demonstrate that the face pareidolia illusion persists in the periphery and, thus, is likely a consequence of erroneous face detection.


Subject(s)
Illusions, Humans, Female, Male, Adult, Illusions/physiology, Young Adult, Visual Perception/physiology, Photic Stimulation, Face/physiology, Facial Recognition/physiology, Eye Movements/physiology, Pattern Recognition, Visual/physiology
4.
Cereb Cortex ; 34(13): 172-186, 2024 May 02.
Article in English | MEDLINE | ID: mdl-38696606

ABSTRACT

Individuals with autism spectrum disorder (ASD) experience pervasive difficulties in processing social information from faces. However, the behavioral and neural mechanisms underlying social trait judgments of faces in ASD remain largely unclear. Here, we comprehensively addressed this question by employing functional neuroimaging and parametrically generated faces that vary in facial trustworthiness and dominance. Behaviorally, participants with ASD exhibited reduced specificity but increased inter-rater variability in social trait judgments. Neurally, participants with ASD showed hypo-activation across broad face-processing areas. Multivariate analysis based on trial-by-trial face responses could discriminate participant groups in the majority of the face-processing areas. Encoding social traits in ASD engaged vastly different face-processing areas compared to controls, and encoding different social traits engaged different brain areas. Interestingly, the idiosyncratic brain areas encoding social traits in ASD were still flexible and context-dependent, similar to neurotypicals. Additionally, participants with ASD also showed an altered encoding of facial saliency features in the eyes and mouth. Together, our results provide a comprehensive understanding of the neural mechanisms underlying social trait judgments in ASD.


Subject(s)
Autism Spectrum Disorder, Brain, Facial Recognition, Magnetic Resonance Imaging, Social Perception, Humans, Autism Spectrum Disorder/physiopathology, Autism Spectrum Disorder/diagnostic imaging, Autism Spectrum Disorder/psychology, Male, Female, Adult, Young Adult, Facial Recognition/physiology, Brain/physiopathology, Brain/diagnostic imaging, Judgment/physiology, Brain Mapping, Adolescent
5.
Sci Rep ; 14(1): 10304, 2024 05 05.
Article in English | MEDLINE | ID: mdl-38705917

ABSTRACT

Understanding neurogenetic mechanisms underlying neuropsychiatric disorders such as schizophrenia and autism is complicated by their inherent clinical and genetic heterogeneity. Williams syndrome (WS), a rare neurodevelopmental condition in which both the genetic alteration (hemideletion of ~26 genes at 7q11.23) and the cognitive/behavioral profile are well defined, offers an invaluable opportunity to delineate gene-brain-behavior relationships. People with WS are characterized by increased social drive, including particular interest in faces, together with hallmark difficulty in visuospatial processing. Prior work, primarily in adults with WS, has searched for neural correlates of these characteristics, with reports of altered fusiform gyrus function while viewing socioemotional stimuli such as faces, along with hypoactivation of the intraparietal sulcus during visuospatial processing. Here, we investigated neural function in children and adolescents with WS by using four separate fMRI paradigms, two probing each of these cognitive/behavioral domains. During the two visuospatial tasks, but not during the two face-processing tasks, we found bilateral intraparietal sulcus hypoactivation in WS. In contrast, during both face-processing tasks, but not during the visuospatial tasks, we found fusiform hyperactivation. These data not only demonstrate that previous findings in adults with WS are also present in childhood and adolescence, but also provide a clear example that genetic mechanisms can bias neural circuit function, thereby affecting behavioral traits.


Subject(s)
Magnetic Resonance Imaging, Williams Syndrome, Humans, Williams Syndrome/physiopathology, Williams Syndrome/genetics, Williams Syndrome/diagnostic imaging, Magnetic Resonance Imaging/methods, Adolescent, Child, Female, Male, Brain Mapping/methods, Brain/diagnostic imaging, Brain/physiopathology, Face, Facial Recognition/physiology, Parietal Lobe/physiopathology, Parietal Lobe/diagnostic imaging, Space Perception/physiology
6.
Cogn Res Princ Implic ; 9(1): 25, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38652383

ABSTRACT

The use of face coverings can make communication more difficult by removing access to visual cues as well as affecting the physical transmission of speech sounds. This study aimed to assess the independent and combined contributions of visual and auditory cues to impaired communication when using face coverings. In an online task, 150 participants rated videos of natural conversation along three dimensions: (1) how much they could follow, (2) how much effort was required, and (3) the clarity of the speech. Visual and audio variables were independently manipulated in each video, so that the same video could be presented with or without a superimposed surgical-style mask, accompanied by one of four audio conditions (either unfiltered audio, or audio filtered to simulate the attenuation associated with a surgical mask, an FFP3 mask, or a visor). Hypotheses and analyses were pre-registered. Both the audio and visual variables had a statistically significant negative impact across all three dimensions. Whether or not talkers' faces were visible made the largest contribution to participants' ratings. The study identifies a degree of attenuation whose negative effects can be overcome by the restoration of visual cues. The significant effects observed in this nominally low-demand task (speech in quiet) highlight the importance of visual and audio cues in everyday life and suggest that both should be considered in future face mask designs.


Subject(s)
Cues, Speech Perception, Humans, Adult, Female, Male, Young Adult, Speech Perception/physiology, Visual Perception/physiology, Masks, Adolescent, Speech/physiology, Communication, Middle Aged, Facial Recognition/physiology
7.
BMC Psychiatry ; 24(1): 307, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38654234

ABSTRACT

BACKGROUND: Obstructive sleep apnea-hypopnea syndrome (OSAHS) is a chronic breathing disorder characterized by recurrent upper airway obstruction during sleep. Although previous studies have shown a link between OSAHS and depressive mood, the neurobiological mechanisms underlying mood disorders in OSAHS patients remain poorly understood. This study aims to investigate the emotion processing mechanism in OSAHS patients with depressive mood using event-related potentials (ERPs). METHODS: Seventy-four OSAHS patients were divided into depressive mood and non-depressive mood groups according to their Self-rating Depression Scale (SDS) scores. Patients underwent overnight polysomnography and completed various cognitive and emotional questionnaires. The patients were shown facial images displaying positive, neutral, and negative emotions and asked to identify the emotion category, while their visual evoked potentials were simultaneously recorded. RESULTS: The two groups did not differ significantly in age, BMI, and years of education, but showed significant differences in slow wave sleep ratio (P = 0.039), ESS (P = 0.006), MMSE (P < 0.001), and MOCA scores (P = 0.043). No significant difference was found in accuracy and response time on emotional face recognition between the two groups. N170 latency in the depressive group was significantly longer than in the non-depressive group (P = 0.014 and 0.007) at the bilateral parieto-occipital lobes, while no significant difference in N170 amplitude was found. There was no significant difference in P300 amplitude or latency between the two groups. Furthermore, N170 amplitude at PO7 was positively correlated with the arousal index and negatively correlated with MOCA scores (both P < 0.01). CONCLUSION: OSAHS patients with depressive mood exhibit increased N170 latency and impaired facial emotion recognition ability. Special attention towards depressive mood among OSAHS patients is warranted for its implications for patient care.


Subject(s)
Depression, Emotions, Sleep Apnea, Obstructive, Humans, Male, Middle Aged, Sleep Apnea, Obstructive/physiopathology, Sleep Apnea, Obstructive/psychology, Sleep Apnea, Obstructive/complications, Depression/physiopathology, Depression/psychology, Depression/complications, Female, Adult, Emotions/physiology, Polysomnography, Evoked Potentials/physiology, Electroencephalography, Facial Recognition/physiology, Evoked Potentials, Visual/physiology, Facial Expression
8.
Article in English | MEDLINE | ID: mdl-38607744

ABSTRACT

The purpose of this work is to analyze how new technologies can enhance clinical practice while also examining the physical traits of emotional expressiveness in facial expressions across a number of psychiatric illnesses. Hence, an automatic facial expression recognition system is proposed that analyzes static, sequential, or video facial images from medical healthcare data to detect emotions in people's facial regions. The proposed method is implemented in five steps. The first step is image preprocessing, where the facial region of interest is segmented from the input image. The second component includes a classical deep feature representation and a quantum part that involves successive sets of quantum convolutional layers followed by random quantum variational circuits for feature learning. The proposed system attains faster training using the proposed quantum convolutional neural network approach, which takes [Formula: see text] time, whereas classical convolutional neural network models take [Formula: see text] time. Additionally, performance improvement techniques such as image augmentation, fine-tuning, matrix normalization, and transfer learning are applied to the recognition system. Finally, the scores of the classical and quantum deep learning models are fused to improve the performance of the proposed method. Extensive experiments on the Karolinska Directed Emotional Faces (KDEF), Static Facial Expressions in the Wild (SFEW 2.0), and Facial Expression Recognition 2013 (FER-2013) benchmark databases, together with comparisons against other state-of-the-art methods, demonstrate the improvement achieved by the proposed system.
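The abstract does not specify the fusion rule for combining the classical and quantum model scores; a common late-fusion choice, sketched below under that assumption, is a weighted average of the two models' softmax probability vectors (the logits and the weight `w` are hypothetical illustration values, not the authors' numbers):

```python
import numpy as np

def softmax(logits):
    """Convert raw logits to a probability distribution."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fuse_scores(classical_logits, quantum_logits, w=0.5):
    """Late (score-level) fusion: weighted average of the two
    models' class-probability vectors."""
    p_classical = softmax(classical_logits)
    p_quantum = softmax(quantum_logits)
    return w * p_classical + (1.0 - w) * p_quantum

# Hypothetical logits for one image over 7 emotion classes
c = np.array([2.0, 0.1, 0.3, 0.2, 0.1, 0.0, 0.5])
q = np.array([1.5, 0.2, 0.1, 0.4, 0.3, 0.1, 0.2])
fused = fuse_scores(c, q, w=0.6)
pred = int(np.argmax(fused))  # index of the predicted emotion class
```

The weight `w` would typically be tuned on a validation split; setting `w=0.5` reduces to a plain average of the two score vectors.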


Subject(s)
Facial Recognition, Mental Health, Humans, Benchmarking, Databases, Factual, Neural Networks, Computer
9.
Sensors (Basel) ; 24(7)2024 Apr 04.
Article in English | MEDLINE | ID: mdl-38610510

ABSTRACT

The perception of sound greatly impacts users' emotional states, expectations, affective relationships with products, and purchase decisions. Consequently, assessing the perceived quality of sounds through jury testing is crucial in product design. However, the subjective nature of jurors' responses may limit the accuracy and reliability of jury test outcomes. This research explores the utility of facial expression analysis in jury testing to enhance response reliability and mitigate subjectivity. Several quantitative indicators allow the research hypothesis to be validated, such as the correlation between jurors' emotional responses and valence values, the accuracy of jury tests, and the disparities between jurors' questionnaire responses and the emotions measured by FER (facial expression recognition). Specifically, analysis of attention levels across different attentional states reveals a discernible decrease, with 70 percent of jurors exhibiting reduced attention levels in the 'distracted' state and 62 percent in the 'heavy-eyed' state. On the other hand, regression analysis shows that the correlation between jurors' valence and their choices in the jury test increases when considering only the data where the jurors are attentive. This correlation highlights the potential of facial expression analysis as a reliable tool for assessing juror engagement. The findings suggest that integrating facial expression recognition can enhance the accuracy of jury testing in product design by providing a more dependable assessment of user responses and deeper insights into participants' reactions to auditory stimuli.
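As a minimal sketch of the kind of analysis reported (not the study's actual pipeline), the correlation between FER valence estimates and jury-test responses can be computed for all trials and then restricted to attentive trials; every number below is invented for illustration:

```python
import numpy as np

# Hypothetical per-trial data: FER valence estimates, the juror's
# rating in the jury test, and whether the juror was attentive.
valence   = np.array([0.8, 0.1, 0.6, -0.3, 0.7, -0.5, 0.4, 0.2])
rating    = np.array([5,   2,   4,    1,   5,    1,   4,   3  ])
attentive = np.array([1,   1,   1,    0,   1,    0,   1,   1  ], dtype=bool)

# Pearson correlation over all trials vs. attentive trials only
r_all = np.corrcoef(valence, rating)[0, 1]
r_att = np.corrcoef(valence[attentive], rating[attentive])[0, 1]
```

With real data, comparing `r_att` against `r_all` is one way to quantify the paper's claim that the valence-choice correlation strengthens when inattentive trials are excluded.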


Subject(s)
Facial Recognition, Humans, Reproducibility of Results, Acoustics, Sound, Emotions
10.
Sci Rep ; 14(1): 8121, 2024 04 07.
Article in English | MEDLINE | ID: mdl-38582772

ABSTRACT

This paper proposes an improved strategy for the MobileNetV2 neural network (I-MobileNetV2) in response to problems such as the large parameter counts of existing deep convolutional neural networks and the shortcomings of the lightweight MobileNetV2 in facial emotion recognition tasks, such as easy loss of feature information, poor real-time performance, and low accuracy. The network inherits MobileNetV2's depthwise separable convolution, reducing computational load while maintaining a lightweight profile. It utilizes a reverse fusion mechanism to retain negative features, making information less likely to be lost. The SELU activation function replaces ReLU6 to avoid vanishing gradients. Meanwhile, to improve feature recognition capability, the channel attention mechanism (Squeeze-and-Excitation Networks, SE-Net) is integrated into the MobileNetV2 network. Experiments conducted on the facial expression datasets FER2013 and CK+ showed that the proposed network model achieved facial expression recognition accuracies of 68.62% and 95.96%, improving upon the MobileNetV2 model by 0.72% and 6.14% respectively, while the parameter count decreased by 83.8%. These results empirically verify the effectiveness of the improvements made to the network model.
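The SE-Net channel attention and the SELU activation mentioned above can be sketched in plain NumPy. This is an illustrative sketch, not the authors' implementation: the weight shapes assume a hypothetical reduction ratio of 4, and SELU is used in the excitation path in the spirit of the paper's activation change:

```python
import numpy as np

def selu(x, alpha=1.6732632, scale=1.0507010):
    # SELU keeps a nonzero response for negative inputs,
    # unlike ReLU6's hard cut-off.
    return scale * np.where(x > 0, x, alpha * (np.exp(x) - 1))

def se_block(x, w1, w2):
    """Squeeze-and-Excitation channel attention on a (C, H, W) map."""
    z = x.mean(axis=(1, 2))              # squeeze: global average pool -> (C,)
    s = selu(w1 @ z)                     # excitation in a reduced dimension
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))  # sigmoid gate in (0, 1) -> (C,)
    return x * s[:, None, None]          # reweight each channel of the map

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))       # toy feature map with C=8 channels
w1 = rng.standard_normal((2, 8)) * 0.1   # 8 -> 2: reduction ratio r=4
w2 = rng.standard_normal((8, 2)) * 0.1   # 2 -> 8: restore channel dimension
y = se_block(x, w1, w2)
```

Because the gate is a sigmoid, each output channel is the input channel scaled by a factor strictly between 0 and 1, which is how the block suppresses uninformative channels.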


Subject(s)
Accidental Injuries, Facial Recognition, Humans, Neural Networks, Computer, Recognition, Psychology
11.
Sci Rep ; 14(1): 9402, 2024 04 24.
Article in English | MEDLINE | ID: mdl-38658575

ABSTRACT

Perceptual decisions are derived from the combination of priors and sensory input. While priors are broadly understood to reflect experience/expertise developed over one's lifetime, the role of perceptual expertise at the individual level has seldom been directly explored. Here, we manipulated probabilistic information associated with a high- and a low-expertise category (faces and cars, respectively), while assessing individual levels of expertise with each category. Sixty-seven participants learned the probabilistic association between a color cue and each target category (face/car) in a behavioural categorization task. Neural activity (EEG) was then recorded in a similar paradigm in the same participants, featuring the previously learned contingencies without the explicit task. Behaviourally, perception of the higher-expertise category (faces) was modulated by expectation. Specifically, we observed facilitatory and interference effects when targets were correctly or incorrectly expected, which were also associated with independently measured individual levels of face expertise. Multivariate pattern analysis of the EEG signal revealed clear effects of expectation from 100 ms post stimulus, with significant decoding of the neural response to expected vs. unexpected stimuli when viewing identical images. The latency of peak decoding when participants saw faces was directly associated with individual facilitation effects in the behavioural task. The current results not only provide time-sensitive evidence of expectation effects on early perception but also highlight the role of higher-level expertise in forming priors.
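Time-resolved multivariate decoding of the kind reported here trains a classifier on the channel pattern at each timepoint and cross-validates its accuracy. A minimal sketch with a nearest-centroid classifier and simulated data (this is not the study's pipeline, and the effect size and timing are invented):

```python
import numpy as np

def decode_timecourse(X, y, n_folds=4):
    """Per-timepoint decoding accuracy with a nearest-centroid
    classifier. X: (trials, channels, times); y: binary labels."""
    n_trials, _, n_times = X.shape
    folds = np.arange(n_trials) % n_folds          # interleaved CV folds
    acc = np.zeros(n_times)
    for t in range(n_times):
        correct = 0
        for f in range(n_folds):
            train, test = folds != f, folds == f
            c0 = X[train & (y == 0), :, t].mean(axis=0)   # class centroids
            c1 = X[train & (y == 1), :, t].mean(axis=0)
            d0 = np.linalg.norm(X[test, :, t] - c0, axis=1)
            d1 = np.linalg.norm(X[test, :, t] - c1, axis=1)
            correct += np.sum((d1 < d0) == (y[test] == 1))
        acc[t] = correct / n_trials
    return acc

rng = np.random.default_rng(1)
y = np.repeat([0, 1], 40)                     # 40 trials per condition
X = rng.standard_normal((80, 32, 50))         # 32 channels, 50 timepoints
X[y == 1, :, 20:] += 0.8                      # simulated effect from timepoint 20 on
acc = decode_timecourse(X, y)
```

Plotting `acc` against time (and marking where it exceeds a permutation-based chance level) is the usual way to read off decoding onset and peak latency.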


Subject(s)
Electroencephalography, Facial Recognition, Humans, Male, Female, Adult, Facial Recognition/physiology, Young Adult, Photic Stimulation, Reaction Time/physiology, Visual Perception/physiology, Face/physiology
12.
Sci Rep ; 14(1): 9418, 2024 04 24.
Article in English | MEDLINE | ID: mdl-38658628

ABSTRACT

Pupil contagion refers to the observer's pupil-diameter changes in response to changes in the pupil diameter of others. Recent studies on the other-race effect on pupil contagion have mainly focused on using eye region images as stimuli, revealing the effect in adults but not in infants. To address this research gap, the current study used whole-face images as stimuli to assess the pupil-diameter response of 5-6-month-old and 7-8-month-old infants to changes in the pupil-diameter of both upright and inverted unfamiliar-race faces. The study initially hypothesized that there would be no pupil contagion in either upright or inverted unfamiliar-race faces, based on our previous finding of pupil contagion occurring only in familiar-race faces among 5-6-month-old infants. Notably, the current results indicated that 5-6-month-old infants exhibited pupil contagion in both upright and inverted unfamiliar-race faces, while 7-8-month-old infants showed this effect only in upright unfamiliar-race faces. These results demonstrate that the face inversion effect of pupil contagion does not occur in 5-6-month-old infants, thereby suggesting the presence of the other-race effect in pupil contagion among this age group. Overall, this study provides the first evidence of the other-race effect on infants' pupil contagion using face stimuli.


Subject(s)
Pupil, Humans, Pupil/physiology, Infant, Male, Female, Photic Stimulation, Facial Recognition/physiology
13.
Brain Res Bull ; 211: 110946, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38614407

ABSTRACT

Post-traumatic stress disorder (PTSD) is associated with abnormalities in the processing and regulation of emotion as well as cognitive deficits. This study evaluated the differential brain activation patterns associated with cognitive and emotional distractors during working memory (WM) maintenance for human faces between patients with PTSD and healthy controls (HCs), and assessed the relationship between changes in the activation patterns under the opposing effects of the distraction types and gray matter volume (GMV). Twenty-two patients with PTSD and twenty-two HCs underwent T1-weighted magnetic resonance imaging (MRI) and event-related functional MRI (fMRI), respectively. Event-related fMRI data were recorded while subjects performed a delayed-response WM task with human face and trauma-related distractors. Compared to the HCs, the patients with PTSD showed significantly reduced GMV of the inferior frontal gyrus (IFG) (p < 0.05, FWE-corrected). For the human face distractor trial, the patients showed significantly decreased activity in the superior frontal gyrus and IFG compared with HCs (p < 0.05, FWE-corrected). The patients showed lower accuracy scores and slower reaction times for the face recognition task with trauma-related distractors compared with HCs, as well as significantly increased brain activity in the superior temporal gyrus (STG) during the trauma-related distractor trial (p < 0.05, FWE-corrected). Such differential brain activation patterns associated with the effects of distraction in PTSD patients may be linked to neural mechanisms associated with impairments in both cognitive control for confusable distractors and the ability to control emotional distraction.


Subject(s)
Brain, Emotions, Magnetic Resonance Imaging, Memory, Short-Term, Stress Disorders, Post-Traumatic, Humans, Stress Disorders, Post-Traumatic/physiopathology, Stress Disorders, Post-Traumatic/diagnostic imaging, Stress Disorders, Post-Traumatic/pathology, Male, Memory, Short-Term/physiology, Adult, Female, Emotions/physiology, Brain/physiopathology, Brain/diagnostic imaging, Brain/pathology, Cognition/physiology, Brain Mapping, Young Adult, Facial Recognition/physiology, Reaction Time/physiology, Middle Aged, Gray Matter/diagnostic imaging, Gray Matter/pathology, Gray Matter/physiopathology, Attention/physiology
14.
Cogn Emot ; 38(3): 296-314, 2024 May.
Article in English | MEDLINE | ID: mdl-38678446

ABSTRACT

Social exclusion is an emotionally painful experience that leads to various alterations in socio-emotional processing. The perceptual and emotional consequences of social exclusion can vary depending on the paradigm used to manipulate it. Exclusion paradigms differ in the severity and duration of the resulting exclusion experience, which can be classified as either short-term or long-term. The present study aimed to examine the impact of exclusion on socio-emotional processing using different paradigms, one involving the experience of short-term exclusion and one involving imagined long-term exclusion. Ambiguous facial emotions were used as socio-emotional cues. In study 1, the Ostracism Online paradigm was used to manipulate short-term exclusion. In study 2, a new sample of participants imagined long-term exclusion through the future life alone paradigm. Participants in both studies then completed a facial emotion recognition task consisting of morphed ambiguous facial emotions. By means of Point of Subjective Equivalence (PSE) analyses, our results indicate that the experience of short-term exclusion hinders recognition of happy facial expressions. In contrast, imagining long-term exclusion causes difficulties in recognising sad facial expressions. These findings extend the current literature, suggesting that not all social exclusion paradigms affect socio-emotional processing similarly.
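A Point of Subjective Equivalence analysis of this kind fits a psychometric function to each condition's responses over the morph continuum and reads off the 50% point. A minimal sketch with a logistic function fit by grid search; the morph levels and response proportions below are invented to illustrate a rightward PSE shift (i.e. more "happy" signal needed) in one condition:

```python
import numpy as np

def fit_pse(morph_levels, p_target):
    """Fit a logistic psychometric function by grid search over its
    midpoint and slope; return the PSE (the 50% morph level)."""
    mus = np.linspace(morph_levels.min(), morph_levels.max(), 201)
    sigmas = np.linspace(0.5, 30.0, 120)
    best_mu, best_err = None, np.inf
    for mu in mus:
        for s in sigmas:
            pred = 1.0 / (1.0 + np.exp(-(morph_levels - mu) / s))
            err = np.sum((pred - p_target) ** 2)
            if err < best_err:
                best_mu, best_err = mu, err
    return best_mu

# Hypothetical group data: % morph toward happy vs. proportion of
# 'happy' responses in each condition.
levels   = np.array([0, 20, 40, 50, 60, 80, 100], dtype=float)
control  = np.array([0.02, 0.10, 0.35, 0.50, 0.68, 0.90, 0.98])
excluded = np.array([0.01, 0.05, 0.22, 0.38, 0.55, 0.85, 0.97])
pse_control = fit_pse(levels, control)
pse_excluded = fit_pse(levels, excluded)
```

A higher PSE in the excluded condition means participants require a larger proportion of the target emotion in the morph before they report it, i.e. impaired recognition of that expression.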


Subject(s)
Emotions, Facial Expression, Humans, Female, Male, Young Adult, Adult, Facial Recognition, Psychological Distance, Social Isolation/psychology, Recognition, Psychology, Adolescent
15.
Cereb Cortex ; 34(4)2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38679483

ABSTRACT

Prior research has yet to fully elucidate the impact of varying relative saliency between target and distractor on attentional capture and suppression, along with their underlying neural mechanisms, especially when social (e.g. face) and perceptual (e.g. color) information interchangeably serve as singleton targets or distractors, competing for attention in a search array. Here, we employed an additional singleton paradigm to investigate the effects of relative saliency on attentional capture (as assessed by N2pc) and suppression (as assessed by PD) of color or face singleton distractors in a visual search task by recording event-related potentials. We found that face singleton distractors with higher relative saliency induced stronger attentional processing. Furthermore, enhancing the physical salience of colors using a bold color ring could enhance attentional processing toward color singleton distractors. Reducing the physical salience of facial stimuli by blurring weakened attentional processing toward face singleton distractors; however, blurring enhanced attentional processing toward color singleton distractors because of the change in relative saliency. In conclusion, the attentional processes of singleton distractors are affected by their relative saliency to singleton targets, with higher relative saliency of singleton distractors resulting in stronger attentional capture and suppression; faces, however, exhibit some specificity in attentional capture and suppression due to high social saliency.


Subject(s)
Attention, Color Perception, Electroencephalography, Evoked Potentials, Humans, Attention/physiology, Female, Male, Young Adult, Evoked Potentials/physiology, Adult, Color Perception/physiology, Photic Stimulation/methods, Facial Recognition/physiology, Pattern Recognition, Visual/physiology, Brain/physiology
16.
J Exp Child Psychol ; 243: 105928, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38643735

ABSTRACT

Previous studies have shown that adults exhibit the strongest attentional bias toward neutral infant faces when viewing faces with different expressions at different attentional processing stages due to different stimulus presentation times. However, it is not clear how the characteristics of the temporal processing associated with the strongest effect change over time. Thus, we combined a free-viewing task with eye-tracking technology to measure adults' attentional bias toward infant and adult faces with happy, neutral, and sad expressions of the same face. The results of the analysis of the total time course indicated that the strongest effect occurred during the strategic processing stage. However, the results of the analysis of the split time course revealed that sad infant faces first elicited adults' attentional bias at 0 to 500 ms, whereas the strongest effect of attentional bias toward neutral infant faces was observed at 1000 to 3000 ms, peaking at 1500 to 2000 ms. In addition, women and men had no differences in their responses to different expressions. In summary, this study provides further evidence that adults' attentional bias toward infant faces across stages of attention processing is modulated by expressions. Specifically, during automatic processing adults' attentional bias was directed toward sad infant faces, followed by a shift to the processing of neutral infant faces during strategic processing, which ultimately resulted in the strongest effect. These findings highlight that this strongest effect is dynamic and associated with a specific time window in the strategic process.


Subject(s)
Attentional Bias, Facial Expression, Facial Recognition, Humans, Female, Male, Attentional Bias/physiology, Young Adult, Adult, Facial Recognition/physiology, Infant, Eye-Tracking Technology, Attention, Time Factors
17.
Zhejiang Da Xue Xue Bao Yi Xue Ban ; 53(2): 254-260, 2024 Apr 25.
Article in English, Chinese | MEDLINE | ID: mdl-38650447

ABSTRACT

Attention deficit hyperactivity disorder (ADHD) is a chronic neurodevelopmental disorder characterized by inattention, hyperactivity-impulsivity, and working memory deficits. Social dysfunction is one of the major challenges faced by children with ADHD. It has been found that children with ADHD do not perform as well as typically developing children on facial expression recognition (FER) tasks. In general, children with ADHD have some difficulty with FER, although some studies suggest that their accuracy for recognizing specific emotions does not differ significantly from that of typically developing children. The neuropsychological mechanisms underlying these difficulties are threefold. First, at the neuroanatomical level, children with ADHD show smaller gray matter volume and surface area in the amygdala and medial prefrontal cortex than typically developing children, as well as reduced density and volume of axons and cells in certain frontal white matter fiber tracts. Second, at the neurophysiological level, children with ADHD exhibit increased slow-wave activity in the electroencephalogram, and event-related potential studies reveal abnormalities in emotional regulation and in responses to angry faces when viewing facial stimuli. Third, at the psychological level, psychosocial stressors may influence FER abilities in children with ADHD, and sleep deprivation may significantly raise their recognition threshold for negative expressions such as sadness and anger. This article reviews research from the past three years on the FER abilities of children with ADHD, analyzing the FER deficit along these three dimensions (neuroanatomy, neurophysiology, and psychology), with the aim of providing new perspectives for further research and for the clinical treatment of ADHD.


Subject(s)
Attention Deficit Disorder with Hyperactivity, Facial Expression, Humans, Attention Deficit Disorder with Hyperactivity/physiopathology, Attention Deficit Disorder with Hyperactivity/psychology, Child, Facial Recognition/physiology, Emotions
18.
Sci Rep ; 14(1): 9794, 2024 04 29.
Article in English | MEDLINE | ID: mdl-38684721

ABSTRACT

Face perception is a major topic in vision research. Most previous research has concentrated on (holistic) spatial representations of faces, often using static faces as stimuli. However, faces are highly dynamic stimuli containing important temporal information. How sensitive humans are to temporal information in dynamic faces is not well understood. Studies investigating temporal information in dynamic faces usually focus on the processing of emotional expressions, yet faces also contain relevant temporal information in the absence of any strong emotional expression. To investigate cues that modulate human sensitivity to temporal order, we used muted dynamic neutral-face videos in two experiments, varying the orientation of the faces (upright and inverted) and the presence or absence of eye blinks as partial dynamic cues. Participants viewed short, muted, monochrome videos of models vocalizing a widely known text (National Anthem), played either forward (in the correct temporal order) or backward. Participants were asked to judge the direction of the temporal order for each video and, at the end of the experiment, whether they had understood the speech. We found that face orientation and the presence or absence of an eye blink affected sensitivity, criterion (bias), and reaction time: overall, sensitivity was higher for upright than for inverted faces, and in the condition with an eye blink compared with the condition without one. Reaction times were mostly faster in the conditions with higher sensitivity. A bias to report inverted faces as 'backward', observed in Experiment I, where upright and inverted faces were randomly interleaved within each block, was absent when upright and inverted faces were presented in separate blocks in Experiment II. Language comprehension results revealed higher sensitivity when participants understood the speech than when they did not, in both experiments.
Taken together, our results showed higher sensitivity for upright than for inverted faces, suggesting that the perception of dynamic, task-relevant information is superior in the canonical orientation. Furthermore, partial information from eye blinks, in addition to mouth movements, appeared to play a significant role in dynamic face perception, both for upright and inverted faces. We suggest that studying the perception of facial dynamics beyond emotional expressions will help clarify the mechanisms underlying the temporal integration of facial information from different (partial and holistic) sources, and our results show how human observers employ different strategies, depending on the available information, when judging the temporal order of faces.
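The sensitivity and criterion (bias) measures reported in this abstract are the standard equal-variance signal-detection indices d′ and c, computed from hit and false-alarm rates; for a forward/backward judgment, "forward played, reported forward" might count as a hit and "backward played, reported forward" as a false alarm. A minimal sketch of the textbook formulas (the example rates are invented, and extreme rates of 0 or 1 would need a correction such as the log-linear rule before use):

```python
from statistics import NormalDist

def dprime_criterion(hit_rate, fa_rate):
    """Equal-variance signal-detection sensitivity (d') and criterion (c).

    d' = z(H) - z(F);  c = -(z(H) + z(F)) / 2,
    where z is the inverse of the standard normal CDF.
    Rates must lie strictly between 0 and 1.
    """
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion
```

With symmetric rates such as H = 0.8 and F = 0.2, d′ is about 1.68 and c is 0 (no response bias); a negative c would correspond to a liberal tendency to report one response, analogous to the 'backward' bias for inverted faces described above.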


Subject(s)
Facial Recognition, Humans, Female, Male, Facial Recognition/physiology, Adult, Young Adult, Reaction Time/physiology, Facial Expression, Blinking/physiology, Photic Stimulation/methods, Emotions/physiology, Face/physiology, Cues (Psychology)
19.
Nat Commun ; 15(1): 3407, 2024 Apr 22.
Article in English | MEDLINE | ID: mdl-38649694

ABSTRACT

The perception and neural processing of sensory information are strongly influenced by prior expectations. The integration of prior and sensory information can manifest through distinct underlying mechanisms: focusing on unexpected input, denoted as prediction error (PE) processing, or amplifying anticipated information via sharpened representation. In this study, we employed computational modeling using deep neural networks combined with representational similarity analyses of fMRI data to investigate these two processes during face perception. Participants were cued to see face images, some generated by morphing two faces, leading to ambiguity in face identity. We show that expected faces were identified faster and perception of ambiguous faces was shifted towards priors. Multivariate analyses uncovered evidence for PE processing across and beyond the face-processing hierarchy from the occipital face area (OFA), via the fusiform face area, to the anterior temporal lobe, and suggest sharpened representations in the OFA. Our findings support the proposition that the brain represents faces grounded in prior expectations.
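At the core of the representational similarity analyses mentioned in this abstract is a comparison between a neural representational dissimilarity matrix (RDM) and candidate model RDMs, typically via rank correlation of their upper triangles. A minimal, dependency-free sketch under assumed inputs (plain nested lists as RDMs; the rank correlation below ignores ties; this is not the study's actual analysis code):

```python
def upper_triangle(rdm):
    """Flatten the upper triangle (excluding the diagonal) of a square RDM."""
    n = len(rdm)
    return [rdm[i][j] for i in range(n) for j in range(i + 1, n)]

def spearman(a, b):
    """Spearman rank correlation (no tie handling) between two vectors."""
    def ranks(x):
        order = sorted(range(len(x)), key=lambda i: x[i])
        r = [0.0] * len(x)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    ra, rb = ranks(a), ranks(b)
    ma, mb = sum(ra) / len(ra), sum(rb) / len(rb)
    num = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    den = (sum((x - ma) ** 2 for x in ra) * sum((y - mb) ** 2 for y in rb)) ** 0.5
    return num / den

def rsa_fit(neural_rdm, model_rdms):
    """Correlate a neural RDM with each named model RDM (e.g. a prediction-error
    model vs. a sharpening model) and return {model_name: rho}."""
    v = upper_triangle(neural_rdm)
    return {name: spearman(v, upper_triangle(m)) for name, m in model_rdms.items()}
```

In a design like this one, the model RDM whose correlation with a region's neural RDM is reliably higher (e.g. in the OFA) is taken as evidence for the corresponding mechanism.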


Subject(s)
Brain Mapping, Facial Recognition, Magnetic Resonance Imaging, Humans, Male, Female, Adult, Young Adult, Facial Recognition/physiology, Brain/physiology, Brain/diagnostic imaging, Temporal Lobe/physiology, Temporal Lobe/diagnostic imaging, Face, Photic Stimulation, Neural Networks (Computer), Occipital Lobe/physiology, Occipital Lobe/diagnostic imaging, Pattern Recognition (Visual)/physiology, Visual Perception/physiology
20.
J Pers Soc Psychol ; 126(3): 390-412, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38647440

ABSTRACT

There is abundant evidence that emotion categorization is influenced by the social category membership of target faces, with target sex and target race modulating the ease with which perceivers can categorize happy and angry emotional expressions. However, theoretical interpretation of these findings is constrained by gender and race imbalances in both the participant samples and target faces typically used when demonstrating these effects (e.g., most participants have been White women and most Black targets have been men). Across seven experiments, the current research used gender-matched samples (Experiments 1a and 1b), gender- and racial identity-matched samples (Experiments 2a and 2b), and manipulations of social context (Experiments 3a, 3b, and 4) to establish whether emotion categorization is influenced by interactions between the social category membership of perceivers and target faces. Supporting this idea, we found the presence and size of the happy face advantage were influenced by interactions between perceivers and target social categories, with reliable happy face advantages in reaction times for ingroup targets but not necessarily for outgroup targets. White targets and female targets were the only categories associated with a reliable happy face advantage that was independent of perceiver category. The interactions between perceiver and target social category were eliminated when targets were blocked by social category (e.g., a block of all White female targets; Experiments 3a and 3b) and accentuated when targets were associated with additional category information (i.e., ingroup/outgroup nationality; Experiment 4). These findings support the possibility that contextually sensitive intergroup processes influence emotion categorization. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
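The "happy face advantage" in this abstract is a reaction-time difference: the speedup for categorizing happy relative to angry expressions, computed separately per target group. A toy sketch with an assumed trial format and invented numbers (not the authors' analysis, which additionally involved accuracy and full factorial models):

```python
from collections import defaultdict

def happy_face_advantage(trials):
    """Mean RT(angry) - RT(happy) per target group.

    trials: list of (group, emotion, rt_ms), emotion in {'happy', 'angry'}.
    A positive value means happy faces were categorized faster (the advantage).
    """
    sums = defaultdict(lambda: [0.0, 0])  # (group, emotion) -> [rt_sum, count]
    for group, emotion, rt in trials:
        cell = sums[(group, emotion)]
        cell[0] += rt
        cell[1] += 1
    mean = lambda key: sums[key][0] / sums[key][1]
    groups = {g for g, _, _ in trials}
    return {g: mean((g, 'angry')) - mean((g, 'happy')) for g in groups}
```

Comparing this difference for ingroup versus outgroup targets is what reveals the perceiver-by-target interaction the paper reports: a reliable positive advantage for ingroup targets that may shrink or vanish for outgroup targets.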


Subject(s)
Emotions, Facial Expression, Facial Recognition, Group Processes, Happiness, Social Perception, Humans, Female, Male, Adult, Young Adult, Social Identification